7 research outputs found

    Assistive technology design and development for acceptable robotics companions for ageing years

    © 2013 Farshid Amirabdollahian et al., licensee Versita Sp. z o. o. This work is licensed under the Creative Commons Attribution-NonCommercial-NoDerivs license, which means that the text may be used for non-commercial purposes, provided credit is given to the author.

    A new stream of research and development responds to increasing life expectancy across the world. It includes technologies that enhance the well-being of individuals, specifically older people. The ACCOMPANY project focuses on home companion technologies and the issues surrounding technology development for assistive purposes. The project addresses several often-overlooked aspects of technology design, divided into areas such as empathic and social human-robot interaction, robot learning and memory visualisation, and monitoring a person's activities at home. To bring these aspects together, a dedicated task ensures the technological integration of these approaches on an existing robotic platform, Care-O-Bot® 3, in the context of a smart-home environment equipped with an array of sensors. Formative and summative evaluation cycles are then used to assess the emerging prototype, identifying acceptable behaviours and roles for the robot (for example, as a butler or a trainer) while comparing user requirements against achieved progress. In a novel approach, the project also considers ethical concerns: by highlighting principles such as autonomy, independence, enablement, safety and privacy, it provides a discussion medium in which user views on these principles, and the tensions between some of them (for example, between privacy and autonomy on the one hand and safety on the other), can be captured and considered in design cycles throughout project development.

    Peer reviewed

    Learning to recognize human activities using soft labels

    Human activity recognition is of great importance in robot-care scenarios. Typically, training such a system requires activity labels that are both complete and accurate. In this paper, we go beyond this restriction and propose a learning method that allows labels to be incomplete and uncertain. We introduce the idea of soft labels, which allows annotators to assign multiple weighted labels to data segments. This is useful in many situations, e.g., when the labels are uncertain, when part of the labels are missing, or when multiple annotators assign inconsistent labels. We formulate activity recognition as a sequential labeling problem and embed latent variables in the model to exploit sub-level semantics for better estimation. We propose a max-margin framework that incorporates soft labels when learning the model parameters. The model is evaluated on two challenging datasets; to simulate uncertainty in data annotation, we randomly change the labels of transition segments. The results show a significant improvement over the state-of-the-art approach.
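The soft-label idea can be illustrated with a minimal sketch (hypothetical, not the paper's implementation or its max-margin formulation): each segment carries a set of weighted labels rather than one hard label, and a prediction is penalised by the annotation weight it fails to capture.

```python
# Hypothetical sketch: soft labels as weighted label sets per data segment.
# An annotator may assign several labels with confidence weights; we
# normalize them into a distribution and score a prediction by the share
# of annotation weight it misses.

def normalize(soft_label):
    """Turn {label: weight} into a probability distribution."""
    total = sum(soft_label.values())
    return {lab: w / total for lab, w in soft_label.items()}

def soft_loss(prediction, soft_label):
    """0 if the prediction carries all annotation weight, 1 if none."""
    dist = normalize(soft_label)
    return 1.0 - dist.get(prediction, 0.0)

# An uncertain transition segment: the annotator hesitates between two
# activities and weights them 3:1.
segment = {"walking": 3.0, "standing": 1.0}
print(soft_loss("walking", segment))  # 0.25: most of the weight agrees
print(soft_loss("sitting", segment))  # 1.0: no weight agrees
```

A hard label is recovered as the special case where all weight sits on a single label, so the same loss covers both complete and uncertain annotations.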

    Latent Hierarchical Model for Activity Recognition

    We present a novel hierarchical model for human activity recognition. In contrast to approaches that recognize actions and activities successively, our approach jointly models actions and activities in a unified framework and predicts their labels simultaneously. The model embeds a latent layer that captures a richer class of contextual information in both state-state and observation-state pairs. Although loops are present in the model, it retains an overall linear-chain structure in which exact inference is tractable, so the model is very efficient in both inference and learning. The parameters of the graphical model are learned with a Structured Support Vector Machine (Structured-SVM). A data-driven approach is used to initialize the latent variables, so no manual labeling of the latent states is required. Experimental results on two benchmark datasets show that our model outperforms the state-of-the-art approach while being computationally more efficient.
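The tractability claim rests on the linear-chain structure: on a chain, the highest-scoring label sequence can be found exactly with Viterbi decoding. A minimal sketch (with made-up scores, not the paper's learned parameters or latent layer):

```python
# Hypothetical sketch of exact inference on a linear-chain model.
# obs_scores[t][s] scores state s at step t; trans_scores[(p, s)] scores
# the transition p -> s. Viterbi finds the best sequence exactly.

def viterbi(obs_scores, trans_scores):
    states = list(obs_scores[0])
    best = [dict(obs_scores[0])]  # best[t][s]: best path score ending in s
    back = []                     # backpointers for path recovery
    for t in range(1, len(obs_scores)):
        cur, ptr = {}, {}
        for s in states:
            prev, score = max(
                ((p, best[-1][p] + trans_scores.get((p, s), 0.0))
                 for p in states),
                key=lambda x: x[1])
            cur[s] = score + obs_scores[t][s]
            ptr[s] = prev
        best.append(cur)
        back.append(ptr)
    # Trace back from the best final state.
    path = [max(best[-1], key=best[-1].get)]
    for ptr in reversed(back):
        path.append(ptr[path[-1]])
    return list(reversed(path))

obs = [{"walk": 2.0, "sit": 0.1},
       {"walk": 0.2, "sit": 0.3},
       {"walk": 0.1, "sit": 2.0}]
trans = {("walk", "walk"): 1.0, ("sit", "sit"): 1.0}  # favour staying put
print(viterbi(obs, trans))  # ['walk', 'sit', 'sit']
```

Note how the transition bonus overrides the weak middle observation: the middle frame slightly favours "sit" on its own, and the chain keeps the sequence consistent. Runtime is linear in sequence length and quadratic in the number of states, which is the efficiency the abstract refers to.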

    Multi-user identification and efficient user approaching by fusing robot and ambient sensors

    We describe a novel framework that combines an overhead camera and a robot RGB-D sensor for real-time people finding. Finding people is one of the most fundamental tasks in robot home-care scenarios, and it consists of many components, e.g. people detection, people tracking, face recognition and robot navigation. Researchers have worked extensively on these components, but as isolated tasks; surprisingly little attention has been paid to bridging them into an entire system. In this paper, we integrate these separate modules seamlessly and evaluate the complete system in a robot-care scenario. The results show greatly improved efficiency when the robot is aided by the localization system of the overhead camera.
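One simple way to combine the two position sources (a sketch under assumed Gaussian noise, not the paper's method; all numbers are illustrative) is inverse-variance weighting: the ceiling camera gives a coarse global estimate, the on-board RGB-D sensor a precise local one, and the fused point is what a navigation stack would approach.

```python
# Hypothetical sketch: fuse two 2-D position estimates of the same person
# by inverse-variance weighting, so the more certain sensor dominates.

def fuse(pos_a, var_a, pos_b, var_b):
    """Inverse-variance weighted mean of two (x, y) estimates."""
    wa, wb = 1.0 / var_a, 1.0 / var_b
    return tuple((wa * a + wb * b) / (wa + wb)
                 for a, b in zip(pos_a, pos_b))

overhead = (3.0, 4.0)  # coarse: ceiling camera, high variance
rgbd = (3.2, 4.4)      # precise: on-board depth sensor, low variance
print(fuse(overhead, 0.5, rgbd, 0.1))
```

The fused estimate lands close to the RGB-D reading, which matches the intuition in the abstract: the overhead camera narrows the search so the robot only needs its precise sensor at close range.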

    Expression-Invariant Age Estimation Using Structured Learning

    In this paper, we investigate and exploit the influence of facial expressions on automatic age estimation. Unlike existing approaches, our method jointly learns age and expression by introducing a new graphical model with a latent layer between the age/expression labels and the features. This layer learns the relationship between age and expression and captures the facial changes induced by ageing and by expression appearance, thus yielding expression-invariant age estimation. In experiments on three age-expression datasets (FACES [1], Lifespan [2] and NEMO [3]), performance improves when age is learnt jointly with expression compared to expression-independent age estimation: the age estimation error is reduced by 14.43, 37.75 and 9.30 percent for the FACES, Lifespan and NEMO datasets respectively. The results obtained by our graphical model, without prior knowledge of the expressions of the tested faces, are better than the best reported results for all datasets. The flexibility of the proposed model to include more cues is explored by incorporating gender together with age and expression; the results show performance improvements for all cues.
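The core of joint estimation is that the two labels are predicted together rather than in sequence. A toy sketch of that idea (all scores are invented for illustration; the paper's model additionally uses a learned latent layer):

```python
# Hypothetical sketch of joint inference: score every (age, expression)
# pair with per-label appearance terms plus a compatibility term that
# models their interaction, then take the joint argmax.

AGE_SCORE = {"young": 1.0, "old": 1.2}         # appearance evidence for age
EXPR_SCORE = {"neutral": 0.2, "smiling": 1.5}  # appearance evidence for expression
# Smiling creases can mimic ageing lines, so the interaction between the
# two labels is modelled explicitly rather than ignored.
COMPAT = {("young", "smiling"): 0.8}

def joint_predict():
    pairs = [(a, e) for a in AGE_SCORE for e in EXPR_SCORE]
    return max(pairs,
               key=lambda p: AGE_SCORE[p[0]] + EXPR_SCORE[p[1]]
                             + COMPAT.get(p, 0.0))

print(joint_predict())  # ('young', 'smiling')
```

Here the age evidence alone slightly favours "old", but the compatibility term flips the joint decision to ("young", "smiling"): the model explains the wrinkle-like evidence as expression rather than age, which is the expression-invariance effect the abstract describes.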

    Sensors for the recognition of activities in care applications (Sensoren voor de herkenning van activiteiten in zorgtoepassingen)

    Symposium on Intelligent Systems in Care (Intelligente systemen in de zorg), KU Leuven, 9 March 2012